The EU AI Act: Essentials for data protection and privacy pros
Posted: March 25, 2025
The EU AI Act works differently from the laws more familiar to data protection and privacy teams, such as the GDPR and the ePrivacy Directive.
The AI Act is essentially a product safety law. While it protects fundamental rights in many ways, it does so by ensuring that AI systems – and products containing AI systems – meet specific standards regarding their quality, their accompanying documentation, and the accountability of their operators.
This guide provides an overview of the AI Act to help data protection and privacy professionals navigate its complexities and understand its implications for their work.
Types of operators under the AI Act
The AI Act applies differently to different types of actors, known collectively as “operators”. Here are some of the most important types of operators under the AI Act.
- Provider: Essentially, the developer of an AI system or a General Purpose AI (GPAI) model.
- Deployer: The “user” of an AI system—for example, a university that implements an AI system to scan course applications (rather than the individual person using the AI system on the university’s behalf).
- Distributor: An actor in the AI supply chain that makes an AI system available in the EU—other than the provider, the deployer, or an “importer” (an importer is similar to a distributor, except that it brings a non-EU AI system into the EU).
Definition of an AI system under the AI Act
The AI Act defines an “AI system” quite broadly. Here are the seven core elements of the “AI system” definition:
- A machine-based system
- Operates with varying levels of autonomy
- May adapt after deployment
- Has explicit or implicit objectives
- Infers from inputs how to generate outputs
- Generates outputs such as predictions, content, recommendations or decisions
- Can influence physical or virtual environments
Broadly speaking, this definition covers:
- “Machine learning”-based approaches, where the system “learns” by identifying patterns in large quantities of data, and
- Rule- and knowledge-based approaches, where the system processes data based on rules determined before it is deployed (see the illustrative sketch below).
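To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting the two approaches with a toy spam filter. All of the names, word lists, and training data are invented for illustration, and the “learning” step is drastically simplified compared with any real machine-learning system.

```python
# Illustrative only: a toy spam filter implemented two ways.

# 1. Rule-based approach: behavior is fully determined by rules
#    written before deployment; the system never changes on its own.
SUSPICIOUS_WORDS = {"prize", "winner", "urgent"}

def rule_based_is_spam(message: str) -> bool:
    # Flag the message if it contains any pre-defined suspicious word.
    words = set(message.lower().split())
    return bool(words & SUSPICIOUS_WORDS)

# 2. Machine-learning approach: behavior is inferred from patterns
#    in labeled examples rather than hand-written rules.
#    (Toy training data; a real system would learn from far more.)
TRAINING_DATA = [
    ("claim your prize now", True),
    ("urgent winner notification", True),
    ("meeting moved to 3pm", False),
    ("lunch tomorrow?", False),
]

def train_word_weights(examples):
    # Count how often each word appears in spam vs. non-spam messages.
    weights = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            weights[word] = weights.get(word, 0) + (1 if is_spam else -1)
    return weights

def ml_is_spam(message: str, weights) -> bool:
    # Score the message by summing the learned word weights.
    score = sum(weights.get(w, 0) for w in message.lower().split())
    return score > 0

if __name__ == "__main__":
    weights = train_word_weights(TRAINING_DATA)
    msg = "urgent: claim your prize"
    print("Rule-based:", rule_based_is_spam(msg))   # True
    print("ML-based:  ", ml_is_spam(msg, weights))  # True
```

The practical difference is where each system’s behavior comes from: the rule-based filter does exactly what its pre-written rules say, while the learned filter’s behavior depends on the patterns in its training data.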
But not all AI systems are subject to regulation under the AI Act.
Types of AI systems
The AI Act is a risk-based regulation that applies different rules and obligations depending on the risks arising from an AI system’s deployment. Here’s a broad overview of the types of AI system regulated by the AI Act:
Prohibited AI practices
Some AI use cases are banned outright under the AI Act. Here’s a broad overview of the AI Act’s “prohibited practices”:
- Subliminal techniques or purposefully manipulative or deceptive techniques to distort behavior and cause significant harm.
- Exploitation of vulnerabilities due to age, disability, or socio-economic situation to distort behavior and cause significant harm.
- Social scoring: evaluating or classifying people based on social behavior or personal characteristics, leading to detrimental treatment.
- Conducting risk assessments of people to predict criminal offences based solely on profiling or assessing personality traits.
- Untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
- Inferring emotions in the workplace and in educational institutions, except for medical or safety reasons.
- Biometric categorization systems that infer race, political opinions, trade union membership, religion, sex life, or sexual orientation.
- Real-time remote biometric identification systems in publicly accessible spaces for law enforcement, unless strictly necessary for specific objectives like searching for victims of serious crimes or preventing imminent threats.
High-risk AI systems
High-risk AI systems trigger many compliance and risk management obligations throughout the supply chain.
There are two broad categories of high-risk AI system:
- High-risk AI systems covered by existing EU product safety laws, such as:
- Safety components in regulated products (e.g., machinery, medical devices, vehicles)
- Risk management in critical sectors (e.g., aviation, healthcare, energy)
- High-risk AI systems listed in Annex III of the AI Act (a list the European Commission can amend), used in areas such as:
- Biometric identification and categorization
- Management of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to essential private and public services
- Law enforcement activities
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Systems requiring transparency
Some AI systems are not “high-risk” but carry a risk of deceiving or misleading people. These types of systems (listed in Article 50 of the AI Act) require additional transparency measures—either from the provider or the deployer.
- AI systems interacting directly with humans (e.g., chatbots), unless this is obvious to a reasonably well-informed observer.
- AI systems generating synthetic audio, images, videos, or text.
- Emotion recognition systems.
- Certain biometric categorization systems.
- AI systems creating or manipulating “deepfake” content.
- AI systems generating or manipulating text published for public information.
General Purpose AI (GPAI) models
The AI Act also applies to “GPAI models” and “GPAI models with systemic risk”.
GPAI models are not AI systems but rather AI models that can underpin many different types of AI systems. Examples include OpenAI’s Generative Pre-trained Transformer (GPT) series, Google’s Gemini models, and Anthropic’s Claude models.
Providers of GPAI models and GPAI models with systemic risk also have many obligations under the AI Act and may need to notify the European Commission.
Remember: The GDPR applies to AI
While the EU AI Act has many important implications for the use and development of AI systems, it only applies in certain contexts.
For most data protection professionals, the GDPR or UK GDPR will continue to be the most important law regulating the use of AI products.
Most AI systems use personal data at all stages of their lifecycle – they require personal data for training and fine-tuning, they receive personal data as inputs, and they produce personal data as outputs.
Applying the GDPR’s principles and rules will often be the most important way a data protection professional can contribute to managing risk and safeguarding individual rights in the context of AI.